Based on lifelong observations of physical, chemical, and biological phenomena in the natural world, humans can often easily picture in their minds what an object will look like in the future. But what about computers? In this paper, we learn computational models of object transformations from time-lapse videos. In particular, we explore the use of generative models to create depictions of objects at future times. These models address several different prediction tasks: generating a future state given a single depiction of an object, generating a future state given two depictions of an object at different times, and generating future states recursively in a recurrent framework. We provide both qualitative and quantitative evaluations of the generated results, and also conduct a human evaluation to compare variations of our models.